Notes
In Computation and Human Experience (1997), pp. 316-333.
Preface
In Computation and Human Experience (1997), pp. xiii-xviii.
Summary
Artificial intelligence has aroused debate ever since Hubert Dreyfus wrote his controversial report, Alchemy and Artificial Intelligence (1965). Philosophers and social scientists who have been influenced by European critical thought have often viewed AI models through philosophical lenses and found them scandalously bad. AI people, for their part, often do not recognize their methods in the interpretations of the critics, and as a result they have sometimes regarded their critics as practically insane.
When I first became an AI person myself, I paid little attention to the critics. As I tried to construct AI models that seemed true to my own experience of everyday life, however, I gradually concluded that the critics were right. I now believe that the substantive analysis of human experience in the main traditions of AI research is profoundly mistaken. My reasons for believing this, however, differ somewhat from those of Dreyfus and other critics, such as Winograd and Flores (1986). Whereas their concerns focus on the analysis of language and rules, my own concerns focus on the analysis of action and representation, and on the larger question of human beings' relationships to the physical environment in which they conduct their daily lives. I believe that people are intimately involved in the world around them and that the epistemological isolation that Descartes took for granted is untenable. This position has been argued at great length by philosophers such as Heidegger and Merleau-Ponty; I wish to argue it technologically.
This is a formidable task, given that many AI people deny that such arguments have any relevance to their research.
14 - Conclusion
In Computation and Human Experience (1997), pp. 302-315.
Summary
Discourse and practice
My argument throughout has turned on an analysis of certain metaphors underlying AI research. This perspective, while limited, provides one set of tools for a critical technical practice. I hope to have conveyed a concrete sense of the role of critical self-awareness in technical work: not just as a separate activity of scholars and critics, but also as an integral part of a technical practitioner's everyday work. By attending to the metaphors of a field, I have argued, it becomes possible to make greater sense of the practical logic of technical work. Metaphors are not misleading or illogical; they are simply part of life. What misleads, rather, is the misunderstanding of the role of metaphor in technical practice. Any practice that loses track of the figurative nature of its language loses consciousness of itself. As a consequence, it becomes incapable of performing the feats of self-diagnosis that become necessary as old ideas reach their limits and call out for new ones to take their place. No finite procedure can make this cycle of diagnosis and revision wholly routine, but articulated theories of discourses and practices can certainly help us to avoid some of the more straightforward impasses.
Perhaps “theories” is not the right word, though, since the effective instrument of critical work is not abstract theorization; rather, it is the practitioner's own cultivated awareness of language and the ways it is used. The analysis of mentalism, for example, has demonstrated how a generative metaphor can distribute itself across the whole of a discourse.
4 - Abstraction and implementation
In Computation and Human Experience (1997), pp. 66-88.
Summary
Structures of computation
All engineering disciplines employ mathematics to represent the physical artifacts they create. The discipline of computing, however, has a distinctive understanding of the role of mathematics in design. Mathematical models can provide a civil engineer with some grounds for confidence that a bridge will stand while the structure is still on paper, but the bridge itself only approximates the math. The computer, by contrast, conforms precisely to a mathematically defined relationship between its inputs and its outputs. Moreover, a civil engineer is intricately constrained by the laws of physics: only certain structures will stand up, and it is far from obvious exactly which ones. The computer engineer, by contrast, can be assured of realizing any mathematical structure at all, as long as it is finite and enough money can be raised to purchase the necessary circuits.
The key to this remarkable state of affairs is the digital abstraction: the discrete 0s and 1s out of which computational structures are built. This chapter and the next will describe the digital abstraction and its elaborate and subtle practical logic in the history of computer engineering and cognitive science. The digital abstraction is the technical basis for the larger distinction in computer work between abstraction (the functional definition of artifacts) and implementation (their actual physical construction). Abstraction and implementation are defined reciprocally: an abstraction is abstracted from particular implementations and an implementation is an implementation of a particular abstraction. This relationship is asymmetrical: a designer can specify an abstraction in complete detail without making any commitments about its implementation. The relationship is confined to the boundaries of the computer; it does not depend on anything in the outside world.
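To make the asymmetry in this excerpt concrete, here is a minimal sketch in Python (my own illustration, not an example from the book): the abstraction is a functional contract that says nothing about construction, and two interchangeable implementations realize it.

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """The abstraction: a functional contract, specified in complete
    detail without any commitment to its implementation."""

    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    """One implementation, built on Python's dynamic arrays."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class LinkedStack(Stack):
    """A second implementation, built on linked pairs; clients of the
    abstraction cannot tell the two apart."""
    def __init__(self):
        self._head = None
    def push(self, item):
        self._head = (item, self._head)
    def pop(self):
        item, self._head = self._head
        return item
```

Code written against Stack runs unchanged on either implementation; the relationship is settled entirely within the formal boundary of the program, as the excerpt says.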
5 - The digital abstraction
In Computation and Human Experience (1997), pp. 89-104.
Summary
Digital logic
My goal in this chapter, as in much of this book, depends on who you are. If you have little technical background, my purpose is to help prepare you for the next few chapters by familiarizing you with the building blocks from which computers are made, together with the whole style of reasoning that goes with them. If you are comfortable with this technology and style of thinking, my goal is to help defamiliarize these things, as part of the general project of rethinking computer science in general and AI in particular (cf. Bolter 1984: 66–79).
Modern computers are made of digital logic circuits (Clements 1991). The technical term “logic” can refer either to the abstract set of logical formulas that specify a computer's function or to the physical circuitry that implements those formulas. In each case, logic is a matter of binary arithmetic. The numerical values of binary arithmetic, conventionally written with the numerals 1 and 0, are frequently glossed using the semantic notions of “true” and “false.” In practice, this terminology has a shifting set of entailments. Sometimes “true” and “false” refer to nothing more than the arithmetic of 1 and 0. Sometimes they are part of the designer's metaphorical use of intentional vocabulary in describing the workings of computers. And sometimes they are part of a substantive psychological theory whose origin is Boole's nineteenth-century account of human reasoning as the calculation of the truth values of logical propositions (Boole 1854).
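As a concrete illustration of these building blocks (my own sketch, not code from the book), the elementary gates of digital logic can be written as functions over 0 and 1, and binary arithmetic such as a one-bit adder is composed from them:

```python
# Elementary logic gates: inputs and outputs are the binary values 0 and 1.
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """One-bit binary addition built purely from logic:
    the sum bit is a XOR b, the carry bit is a AND b."""
    return XOR(a, b), AND(a, b)

# The same values can be glossed as "false"/"true", illustrating the
# shifting entailments of the terminology the excerpt describes.
for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} = carry {carry}, sum {s}")
```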
11 - Representation and indexicality
In Computation and Human Experience (1997), pp. 222-240.
Summary
World models
As an agent gets along in the world, its actions are plainly about the world in some sense. In picking up a cup, I am not just extending my forearm and adjusting my fingers: those movements can be parsimoniously described only in relation to the cup and the ways that cups are used. A conversation about a malfunctioning refrigerator, likewise, really is about that refrigerator; it is not just a series of noises or grammatical constructions. When someone is studying maps and contemplating which road to take, it is probably impossible to provide any coherent account of what that person is doing except in relation to those roads.
AI researchers have understood these phenomena in terms of representations: actions, discussions, and thoughts are held to relate to particular things in the world because they involve mental representations of those things. It can hardly be denied that people do employ representations of various sorts, from cookbooks and billboards to internalized speech and the retinotopic maps of early vision. But the mentalist computational theory of representation has been simultaneously broader and more specific. In this chapter I will discuss the nature and origins of this theory, as well as some reasons to doubt its utility as part of a theory of activity. Chapter 12 will suggest that the primordial forms of representation are best understood as facets of particular time-extended patterns of interaction with the physical and social world. Later sections of the present chapter will prepare some background for this idea by discussing indexicality (the dependence of reference on time and place) and the more fundamental phenomenon of intentionality (the “aboutness” of actions, discussions, and thoughts).
6 - Dependency maintenance
In Computation and Human Experience (1997), pp. 105-123.
Summary
Critical technical practice
Having prepared some background, let us now consider a technical exercise. Since readers from different disciplinary backgrounds will bring contrasting expectations to an account of technical work, I will begin by reviewing the critical spirit in which the technical exercises in this book are intended.
Reflexively, the point is not to start over from scratch, throwing out the whole history of technical work and replacing it with new mechanisms and methods. Such a clean break would be impossible. The inherited practices of computational work form a massive network in which each practice tends to reinforce the others. Moreover, a designer who wishes to break with these practices must first become conscious of them, and nobody can expect to become conscious of a whole network of inherited habits and customs without considerable effort and many false starts. A primary goal of critical technical work, then, is to cultivate awareness of the assumptions that lie implicit in inherited technical practices. To this end, it is best to start by applying the most fundamental and familiar technical methods to substantively new ends. Such an effort is bound to encounter a world of difficulties, and the most valuable intellectual work consists in critical reflection upon the reductio ad absurdum of conventional methods. Ideally this reflexive work will make previously unreflected aspects of the practices visible, thus raising the question of what alternatives might be available.
Substantively, the goal is to see what happens in the course of designing a device that interacts with its surroundings. Following the tenets of interactionist methodology, the focus is not on complex new machinery but on the dynamics of a relatively simple architecture's engagement with an environment.
Computation and Human Experience
Philip E. Agre. Cambridge University Press, 1997; published online 7 December 2009.
This book offers a critical reconstruction of the fundamental ideas and methods of artificial intelligence research. Through close attention to the metaphors of artificial intelligence and their consequences for the field's patterns of success and failure, it argues for a reorientation of the field away from thought in the head and towards activity in the world. By considering computational ideas in a large, philosophical framework, the author eases critical dialogue between technology and the social sciences. AI can benefit from an understanding of the field in relation to human nature, and in return, it offers a powerful mode of investigation into the practicalities of physical realization.
3 - Machinery and dynamics
In Computation and Human Experience (1997), pp. 49-65.
Summary
Mentalism
As a substantive matter, the discourse of cognitive science has a generative metaphor, according to which every human being has an abstract inner space called a “mind.” The metaphor system of “inside,” which Lakoff and Johnson (1980) call the CONTAINER metaphor, is extraordinarily rich. “Inside” is opposed to “outside,” usually in the form of the “outside world,” which sometimes includes the “body” and sometimes does not. This inner space has a boundary that is traversed by “stimuli” or “perception” (headed inward) and “responses” or “behavior” (headed outward). It also has “contents” – mental structures and processes – which differ in kind from the things in the outside world. Though presumably somehow realized in the physical tissue of the brain, these contents are abstract in nature. They stand in a definite but uncomfortable relation to human experiences of sensation, conception, recognition, intention, and desire. This complex of metaphors is historically continuous with the most ancient Western conceptions of the soul (Dodds 1951; Onians 1954) and the philosophy of the early Christian Platonists. It gradually became a secular idea in the development of mechanistic philosophy among the followers of Descartes. In its most recent formulation, the mind figures in a particular technical discourse, the outlines of which I indicated in Chapter 1.
This metaphor system of inside and outside organizes a special understanding of human existence that I will refer to as mentalism. I am using the term “mentalism” in an unusually general way. The psychological movements of behaviorism and cognitivism, despite their mutual antagonism, both subscribe to the philosophy of mentalism.
Frontmatter
In Computation and Human Experience (1997), pp. i-viii.
9 - Running arguments
In Computation and Human Experience (1997), pp. 160-178.
Summary
From plans to arguments
Critical analysis is necessary and valuable, but the progress of intellectual work always turns out to be underlain by deep continuities. Technical work in particular will always pick up again where it left off, hopefully the wiser but nonetheless constrained by the great mass of established technique. Critics interrogating the existing techniques may discover a whole maze of questionable assumptions underneath them, but that discovery in itself does not make the techniques any easier to replace. I will not try to throw the existing techniques of AI out the window and start over; that would be impossible. Instead, I want to work through the practical logic of planning research, continuing to force its internal tensions to the surface as a means of clearing space for alternatives. My starting place is Fikes, Hart, and Nilsson's suggestion (quoted in Chapter 8) that the construction and execution of plans occur in rapid alternation. This suggestion is the reductio ad absurdum of the view that activity is organized through the construction and execution of plans. The absurdity has two levels. On a substantive level, the distinction between planning and execution becomes problematic; “planning” and “execution” become fancy names for “thinking” and “doing,” which in turn become two dynamically interrelated aspects of the same process. On a technical level, the immense costs involved in constructing new plans are no longer amortized across a relatively long period of execution. Even without going to the extreme of constant alternation between planning and execution, Fikes, Hart, and Nilsson still felt the necessity of heroic measures for amortizing the costs of plan construction. These took the form of complex “editing” procedures that annotated and generalized plans, stored them in libraries, and facilitated their retrieval in future situations.
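The limiting case that Fikes, Hart, and Nilsson contemplate can be caricatured in a few lines (a sketch of my own, not code from their system or from the book): when construction and execution alternate at every step, the full cost of plan construction buys only a single action.

```python
def act_by_constant_replanning(world, goal_reached, make_plan, execute):
    """Caricature of planning and execution in rapid alternation:
    an expensive search is repeated at every step, and all of the
    plan beyond its first action is thrown away unexecuted."""
    while not goal_reached(world):
        plan = make_plan(world)          # expensive search, repeated every step
        world = execute(world, plan[0])  # only the first action is ever used
    return world
```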
12 - Deictic representation
In Computation and Human Experience (1997), pp. 241-259.
Summary
Deictic intentionality
As the intellectual history sketched in Chapter 11 makes clear, AI research has been based on definite but only partly articulated views about the nature and purpose of representation. Representations in an agent's mind have been understood as models that correspond to the outside world through a systematic mapping. As a result, the meanings of an agent's representations can be determined independently of its current location, attitudes, or goals. Reference has been a marginal concern within this picture, either assimilated to sense or simply posited through the operation of simulated worlds in which symbols automatically connect to their referents. One consequence of this picture is that indexicality has been almost entirely absent from AI research. And the model-theoretic understanding of representational semantics has made it unclear how we might understand the concrete relationships between a representation-owning agent and the environment in which it conducts its activities.
In making such complaints, one should not confuse the articulated conceptions that inform technical practice with the reality of that practice. As Smith (1987) has pointed out, any device that engages in any sort of interaction with its environment will exhibit some kind of indexicality. For example, a thermometer's reading does not indicate abstractly “the temperature,” since it is the temperature somewhere, nor does it indicate concretely “the temperature in room 11,” since if we moved it to room 23 it would soon indicate the temperature in room 23 instead. Instead, we need to understand the thermometer as indicating “the temperature here” – regardless of whether the thermometer's designers thought in those terms.
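A toy model makes Smith's point tangible (an illustrative sketch of my own, with invented names): the device carries no symbol for any particular room; its reading simply tracks wherever it currently is.

```python
class Room:
    def __init__(self, name, temperature):
        self.name = name
        self.temperature = temperature

class Thermometer:
    """Indicates 'the temperature here': the reading is a function of the
    device's current location, not of any fixed referent stored inside it."""
    def __init__(self, location):
        self.location = location
    def reading(self):
        return self.location.temperature

room11 = Room("room 11", 19.0)
room23 = Room("room 23", 24.5)

t = Thermometer(room11)
print(t.reading())    # 19.0 -- "here" is currently room 11
t.location = room23   # move the device
print(t.reading())    # 24.5 -- same indication, new referent
```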
10 - Experiments with running arguments
In Computation and Human Experience (1997), pp. 179-221.
Summary
Motivation
This chapter demonstrates RA, the computer program introduced in Chapter 9 that illustrates the concept of running arguments. RA has three motivations, which might be reduced to slogans as follows:
It is best to know what you're doing. Plan execution – in the conventional sense, where plans are similar to computer programs and execution is a simple, mechanical process – is inflexible because individual actions are derived from the symbols in a plan, not from an understanding of the current situation. The device that constructed the plan once had a hypothetical understanding of why the prescribed action might turn out to be the right one, but that understanding is long gone. Flexible action in a world of contingency relies on an understanding of the current situation and its consequences.
You're continually redeciding what to do. Decisions about action typically depend on a large number of implicit or explicit premises about both the world and yourself. Since any one of those premises might change, it is important to keep your reasoning up to date. Each moment's actions should be based, to the greatest extent possible, on a fresh reasoning-through of the current situation.
All activity is mostly routine. Almost everything you do during the day is something you have done before. This is not to say that you switch back and forth between two modes, one for routine situations and one for the occasional novel situation. Even when something novel is happening, the vast majority of what you are doing is routine.
As a matter of computational modeling, all of this is more easily said than done. This chapter explains how RA instantiates these three ideals.
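The three slogans suggest a control loop of roughly the following shape (a schematic sketch of my own, not the RA program itself): nothing carries over from one moment to the next except the world, and each action rests on a fresh reasoning-through of the situation.

```python
def agent_loop(sense, decide, act):
    """Schematic of 'continually redeciding what to do': the agent never
    executes a stored plan; it re-derives its action from the current
    situation on every cycle, treating routine and novel moments alike."""
    while True:
        situation = sense()          # know what you're doing: read the world now
        action = decide(situation)   # redecide from current premises
        act(action)
```

Taken naively this is exorbitant; the dependency machinery of Chapters 6 and 7 is what makes wholesale re-deciding affordable, since conclusions whose premises have not changed need not be recomputed.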
Contents
In Computation and Human Experience (1997), pp. ix-xii.
7 - Rule system
In Computation and Human Experience (1997), pp. 124-141.
Summary
Using dependencies in a rule system
This chapter discusses one use of dependencies, a programming language called Life. Although the demonstrations of Chapters 9 and 10 will use Life to make some points about improvised activity, this chapter describes Life programming as a technical matter with little reference to theoretical context. Readers who find these descriptions too involved ought to be able to skip ahead without coming to harm.
Life is a rule language, a simplified version of the Amord language (de Kleer, Doyle, Rich, Steele, and Sussman 1978). This means that a Life “program” consists of a set of rules. Each rule continually monitors the contents of a database of propositions, and sometimes the rules place new propositions in the database. The program that does all of the bookkeeping for this process is called the rule system. The rule system functions as the reasoner for a dependency system. The rule system and dependency system both employ the same database, and most of the propositions in the database have a value of IN or OUT. Roughly speaking, when an IN proposition (the trigger) matches the pattern of an IN rule, the rule fires and the appropriate consequence is assigned the value of IN. If necessary, the system first builds the consequent proposition and inserts it in the database. This might cause other rules to fire in turn, until the whole system settles down. In computer science terms, this is a forward-chaining rule system. The role of dependencies is to accelerate this settling down without changing its outcome. The technical challenge is to get the rule system to mesh smoothly with the dependency maintenance system underneath.
One might take two views of the Life rule system in operation. On one view, dependencies are accelerating the operation of rules.
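For readers who want the flavor in code, here is a radically simplified forward-chaining loop in Python (a sketch of my own under strong simplifying assumptions; the actual Life system adds pattern variables, IN/OUT values, and the dependency maintenance described above):

```python
def run_rules(database, rules):
    """database: a set of propositions, represented here as tuples.
    rules: (trigger_test, consequence_fn) pairs. A rule fires when a
    proposition in the database matches its trigger; its consequence is
    inserted if absent, possibly enabling further firings, until the
    whole system settles down."""
    changed = True
    while changed:
        changed = False
        for trigger_test, consequence_fn in rules:
            for prop in list(database):
                if trigger_test(prop):
                    consequence = consequence_fn(prop)
                    if consequence not in database:
                        database.add(consequence)
                        changed = True
    return database

# A toy run: every "parent" proposition triggers an "ancestor" proposition.
db = {("parent", "ann", "bob"), ("parent", "bob", "cy")}
rules = [(lambda p: p[0] == "parent",
          lambda p: ("ancestor", p[1], p[2]))]
print(sorted(run_rules(db, rules)))
```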
Subject index
In Computation and Human Experience (1997), pp. 366-371.
8 - Planning and improvisation
In Computation and Human Experience (1997), pp. 142-159.
Summary
The idea of planning
For the past thirty years or so, computational theorizing about action has generally been conducted under the rubric of “planning.” Whereas other computational terms such as “knowledge” and “action” and “truth” come to us burdened with complex intellectual histories, the provenance of “plan” and “planning” as technical terms is easy to trace. Doing so will not provide a clear definition of the word “planning” as it is used in AI discourse, for none exists. It will, however, permit us to sort the issues and prepare the ground for new ideas. My exposition will not follow a simple chronological path, because the technical history itself contains significant contradictions; these derive from tensions within the notion of planning.
In reconstructing the history of “plan” and “planning” as computational terms, the most important road passes through Lashley's “serial order” paper (1951) and then through Newell and Simon's earliest papers about GPS (e.g., 1963). Lashley argued, in the face of behaviorist orthodoxy, that the chaining of stimuli and responses could not account for complex human behavioral phenomena such as fluent speech. Instead, he argued, it was necessary to postulate some kind of centralized processing, which he pictured as a holistic combination of analog signals in a tightly interconnected network of neurons. The seeds of the subsequent computational idea of plans lay in Lashley's contention that the serial order of complex behavioral sequences was predetermined by this centralized neural activity and not by the triggering effects of successive stimuli.
References
In Computation and Human Experience (1997), pp. 334-360.
2 - Metaphor in practice
In Computation and Human Experience (1997), pp. 27-48.
Summary
Levels of analysis
The Introduction has sketched the notion of a critical technical practice, explored the distinctive form of knowledge associated with AI, and described a reorientation of computational psychology from a focus on cognition to a focus on activity. It should be clear by now that I am proceeding on several distinct levels at once. It is time to systematize these levels and to provide some account of the theses I will be developing on each level.
On the reflexive level, one develops methods for analyzing the discourses and practices of technical work. Reflexive research cultivates a critical self-awareness, including itself among its objects of study and developing useful concepts for reflecting on the research as it is happening. To this end, I will begin by suggesting that technical language – that is, language used to investigate phenomena in the world by assimilating them to mathematics – is unavoidably metaphorical. My reflexive thesis is that predictable forms of trouble will beset any technical community that supposes its language to be precise and formally well defined. Awareness of the rhetorical properties of technical language greatly facilitates the interpretation of difficulties encountered in everyday technical practice. Indeed, I will proceed largely by diagnosing difficulties that have arisen from my own language – including language that I have inherited uncritically from the computational tradition, as well as the alternative language that I have fashioned as a potential improvement.
On the substantive level, one analyzes the discourses and practices of a particular technical discipline, namely AI. Chapter 1 has already outlined my substantive thesis, which has two parts.
13 - Pengi
In Computation and Human Experience (1997), pp. 260-301.
Summary
Argument
This chapter describes a computer program that illustrates some of the themes I have been developing. Before I discuss this program in detail, let me summarize the argument so far. Recalling the scheme laid out in Chapter 2, this argument has three levels: reflexive, substantive, and technical.
The reflexive argument has prescribed an awareness of the role of metaphor in technical work. As long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem. Critical technical work continually reflects on its substantive commitments, choosing research problems that might help bring unarticulated assumptions into the open. The technical exercises in this book are intended as examples of this process, and Chapter 14 will attempt to draw some lessons from them.
The substantive argument has four steps:
Chapter 2 described two contrasting metaphor systems for AI. Mentalist metaphors divide individual human beings into an inside and outside, with the attendant imagery of contents, boundaries, and movement into and out of the internal mental space. Interactionist metaphors, by contrast, focus on an individual's involvement in a world of familiar activities.
As Chapters 1 and 4 explained, mentalist metaphors have organized the vocabularies of both philosophical and computational theories of human nature for a long time, particularly under the influence of Descartes. This bias is only natural. Our daily activities have a vast background of unproblematic routine, but this background does its job precisely by not drawing attention to itself.